
    Measuring named data networks

    2020 Spring. Includes bibliographical references. Named Data Networking (NDN) is a promising information-centric networking (ICN) Internet architecture that addresses content directly rather than addressing servers. NDN provides new features, such as content-centric security, stateful forwarding, and in-network caches, to better satisfy the needs of today's applications. After many years of technological research and experimentation, the community has started to explore the deployment path for NDN. One NDN deployment challenge is measurement. Unlike IP, which has a mature suite of measurement approaches and tools, NDN has only a few. NDN routing and forwarding are based on name prefixes that do not refer to individual endpoints. While rich NDN functionality facilitates data distribution, it also breaks traditional end-to-end probing-based measurement methods. In this dissertation, we present our work to investigate NDN measurements and fill some research gaps in the field. The thesis of this dissertation is that we can capture a substantial amount of useful and actionable measurements of NDN networks from end hosts. We start by comparing IP and NDN to propose a conceptual framework for NDN measurements. We claim that NDN can be seen as a superset of IP: NDN supports functionality similar to IP's, but it has unique features to facilitate data retrieval. The framework helps identify the aspects in which NDN lacks measurements. This dissertation focuses on investigating active measurements from end hosts. We present our studies in two directions to support the thesis statement. We first present a study that leverages the similarities to replicate IP approaches in NDN networks. We show the first work to measure the NDN-DPDK forwarder, a high-speed NDN forwarder designed and implemented by the National Institute of Standards and Technology (NIST), in a real testbed.
    The results demonstrate that Data payload sizes dominate forwarding performance and that efficiently using every fragment improves goodput. We then present the first work to replicate packet dispersion techniques in NDN networks. Based on the findings from the NDN-DPDK forwarder benchmark, we devise techniques to measure interarrival times of Data packets. The results show that the techniques successfully estimate capacity on end hosts when 1 Gbps network cards are used. Our measurements also indicate that the NDN-DPDK forwarder introduces variance in Data packet interarrivals. We identify the potential bottlenecks and the possible causes of the variance. We then address NDN-specific measurements: measuring the caching state in NDN networks from end hosts. We propose a novel method to extract fingerprints for various caching decision mechanisms. Our simulation results demonstrate that the method can detect caching decisions in a few rounds. We also show that the method is not sensitive to cross-traffic and can be deployed on real topologies for caching policy detection.
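    The packet dispersion idea behind this work can be illustrated with the classic packet-pair estimate, where bottleneck capacity is inferred from the spacing between back-to-back packets. A minimal sketch (the function name and the median filter are illustrative choices, not the dissertation's actual method):

    ```python
    def estimate_capacity(packet_size_bytes, interarrivals_s):
        """Classic packet-pair/dispersion estimate: capacity ~ size / spacing.

        Each interarrival is the gap between two back-to-back Data packets
        measured at the receiving end host; taking the median gap filters
        noise from cross-traffic and forwarder jitter.
        """
        gaps = sorted(interarrivals_s)
        median_gap = gaps[len(gaps) // 2]
        return packet_size_bytes * 8 / median_gap  # bits per second

    # 1500-byte packets spaced ~12 microseconds apart -> roughly a 1 Gbps link
    print(estimate_capacity(1500, [0.000012, 0.000012, 0.000013]))
    ```

    On a real NDN testbed the gaps would come from timestamped Data packet arrivals rather than a hard-coded list, and variance introduced by the forwarder (as the dissertation observes) would widen the spread of the samples.
    
    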

    On the Trustworthiness Landscape of State-of-the-art Generative Models: A Comprehensive Survey

    Diffusion models and large language models have emerged as leading-edge generative models and have sparked a revolutionary impact on various aspects of human life. However, the practical implementation of these models has also exposed inherent risks, highlighting their dual nature and raising concerns regarding their trustworthiness. Despite the abundance of literature on this subject, a comprehensive survey specifically delving into the intersection of large-scale generative models and their trustworthiness remains largely absent. To bridge this gap, this paper investigates both the long-standing and emerging threats associated with these models across four fundamental dimensions: privacy, security, fairness, and responsibility. In this way, we construct an extensive map outlining the trustworthiness of these models, while also providing practical recommendations and identifying future directions. These efforts are crucial for promoting the trustworthy deployment of these models, ultimately benefiting society as a whole. Comment: draft version.

    On the Robustness of Split Learning against Adversarial Attacks

    Split learning enables collaborative deep learning model training while preserving data privacy and model security by avoiding direct sharing of raw data and model details (i.e., server and clients only hold partial sub-networks and exchange intermediate computations). However, existing research has mainly focused on examining its reliability for privacy protection, with little investigation into model security. Specifically, with access to full models, attackers can launch adversarial attacks, and split learning can mitigate this severe threat by disclosing only part of the model to untrusted servers. This paper aims to evaluate the robustness of split learning against adversarial attacks, particularly in the most challenging setting where untrusted servers have access only to the intermediate layers of the model. Existing adversarial attacks mostly target the centralized setting rather than the collaborative setting; thus, to better evaluate the robustness of split learning, we develop a tailored attack called SPADV, which comprises two stages: 1) shadow model training, which addresses the attacker's lack of part of the model, and 2) a local adversarial attack, which produces adversarial examples for evaluation. The first stage requires only a small amount of unlabeled non-IID data, and in the second stage, SPADV perturbs the intermediate output of natural samples to craft adversarial ones. The overall cost of the proposed attack is relatively low, yet its empirical effectiveness is significantly high, demonstrating the surprising vulnerability of split learning to adversarial attacks. Comment: accepted by ECAI 2023, camera-ready version.
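    The second stage of an attack like SPADV can be illustrated with a one-step sign-gradient perturbation applied to an intermediate activation, using a linear "shadow head" to stand in for the missing part of the model. This is a toy sketch in plain Python, not the paper's actual algorithm: the shadow head, dimensions, and epsilon are hypothetical, and the real attack trains the shadow model rather than drawing it at random.

    ```python
    import math, random

    def softmax(v):
        m = max(v)
        e = [math.exp(x - m) for x in v]
        s = sum(e)
        return [x / s for x in e]

    def matvec(W, z):
        return [sum(w * x for w, x in zip(row, z)) for row in W]

    def perturb_intermediate(z, W, true_label, eps=0.1):
        """One-step sign-gradient perturbation of an intermediate activation z.

        For a linear shadow head W with cross-entropy loss, the gradient
        w.r.t. z is W^T (softmax(W z) - one_hot(y)); we step eps in its sign
        direction to craft an adversarial intermediate output.
        """
        p = softmax(matvec(W, z))
        p[true_label] -= 1.0  # softmax(W z) - one_hot(y)
        grad = [sum(W[i][j] * p[i] for i in range(len(W))) for j in range(len(z))]
        return [x + eps * (1 if g > 0 else -1 if g < 0 else 0)
                for x, g in zip(z, grad)]

    random.seed(0)
    W = [[random.gauss(0, 1) for _ in range(8)] for _ in range(3)]  # hypothetical 3-class head
    z = [random.gauss(0, 1) for _ in range(8)]                      # intermediate output
    z_adv = perturb_intermediate(z, W, true_label=1)
    ```

    Each coordinate moves by at most eps, mirroring the bounded-perturbation constraint typical of adversarial example crafting.
    
    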

    Variation of Korotkoff stethoscope sounds during blood pressure measurement: Analysis using a convolutional neural network

    Korotkoff sounds are known to change their characteristics during blood pressure (BP) measurement, resulting in some uncertainty in systolic and diastolic pressure (SBP and DBP) determinations. The aim of this study was to assess the variation of Korotkoff sounds during BP measurement by examining all stethoscope sounds associated with each heartbeat from above systole to below diastole during linear cuff deflation. Three repeat BP measurements were taken from 140 healthy subjects (age 21 to 73 years; 62 female and 78 male) by a trained observer, giving 420 measurements. During the BP measurements, the cuff pressure and stethoscope signals were simultaneously recorded digitally to a computer for subsequent analysis. Heartbeats were identified from the oscillometric cuff pressure pulses. Each detected beat was used to create a time window (1 s, 2000 samples) centered on the oscillometric pulse peak for extracting beat-by-beat stethoscope sounds. A time-frequency two-dimensional matrix was obtained for the stethoscope sounds associated with each beat, and all beats between the manually determined SBPs and DBPs were labeled as ‘Korotkoff’. A convolutional neural network was then used to analyse consistency in the sound patterns associated with Korotkoff sounds. A 10-fold cross-validation strategy was applied to the stethoscope sounds from all 140 subjects, with the data from ten groups of 14 subjects being analysed separately, allowing consistency to be evaluated between groups. Next, within-subject variation of the Korotkoff sounds across the three repeats was quantified, separately for each stethoscope sound beat. There was consistency between folds, with no significant differences between groups of 14 subjects (P = 0.09 to P = 0.62). Our results showed that 80.7% of beats at SBP and 69.5% at DBP were classified as Korotkoff sounds, with significant differences between adjacent beats at systole (13.1%, P = 0.001) and diastole (17.4%, P < 0.001).
    Results reached stability for SBP (97.8%, at the 6th beat below SBP) and DBP (98.1%, at the 6th beat above DBP), with no significant differences between adjacent beats (SBP P = 0.74; DBP P = 0.88). There were no significant differences at high cuff pressures, but at low pressures close to diastole there was a small difference (3.3%, P = 0.02). In addition, greater within-subject variability was observed at SBP (21.4%) and DBP (28.9%), with a significant difference between the two (P < 0.02). In conclusion, this study has demonstrated that Korotkoff sounds can be consistently identified during the period below SBP and above DBP, but that at systole and diastole there can be substantial variation, associated with high variation across the three repeat measurements in each subject.
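    The beat-windowing step described above (a 1 s, 2000-sample window centered on each oscillometric pulse peak) can be sketched as follows; the function name and the edge handling for beats near the start or end of the recording are illustrative assumptions, not details from the paper:

    ```python
    def extract_beat_windows(stethoscope, peak_indices, fs=2000, width_s=1.0):
        """Cut a fixed-length window of stethoscope samples around each
        oscillometric pulse peak (1 s = 2000 samples at fs = 2 kHz, centered
        on the peak). Beats whose window would run off either end of the
        recording are skipped.
        """
        half = int(fs * width_s) // 2
        windows = []
        for peak in peak_indices:
            if peak - half >= 0 and peak + half <= len(stethoscope):
                windows.append(stethoscope[peak - half:peak + half])
        return windows

    signal = [0.0] * 10000                           # 5 s of placeholder samples
    beats = extract_beat_windows(signal, [2500, 5000, 9900])
    # the last peak is too close to the end of the recording, so only
    # the first two peaks yield full 2000-sample windows
    ```

    Each returned window would then be converted to a time-frequency matrix before being fed to the convolutional neural network.
    
    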

    Managing scientific data with named data networking

    Many scientific domains, such as climate science and High Energy Physics (HEP), have data management requirements that are not well supported by the IP network architecture. Named Data Networking (NDN) is a new network architecture whose service model is better aligned with the needs of data-oriented applications. NDN provides features such as best-location retrieval, caching, load sharing, and transparent failover that would otherwise be painstakingly (re-)implemented by each application using point-to-point semantics in an IP network. We present the first scientific data management application designed and implemented on top of NDN. We use this application to manage climate and HEP data over a dedicated, high-performance testbed. Our application has two main components: a UI for dataset discovery queries and a federation of synchronized name catalogs. We show how NDN primitives can be used to implement common data management operations such as publishing, search, efficient retrieval, and publication access control.
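    NDN names are hierarchical and lookups proceed by longest name-prefix match, which is the kind of operation a name catalog relies on. A minimal illustrative sketch (the catalog names and entries below are hypothetical, not taken from the paper):

    ```python
    def longest_prefix_match(table, name):
        """Return the entry whose prefix shares the most leading name
        components with `name`, mimicking NDN's longest-prefix matching
        on hierarchical names such as /climate/CESM/surface-temp/2020.
        """
        parts = name.strip("/").split("/")
        best, best_len = None, -1
        for prefix, entry in table.items():
            p = prefix.strip("/").split("/")
            if len(p) <= len(parts) and p == parts[:len(p)] and len(p) > best_len:
                best, best_len = entry, len(p)
        return best

    # hypothetical catalog entries for climate and HEP dataset namespaces
    catalog = {
        "/climate": "climate-catalog",
        "/climate/CESM": "cesm-catalog",
        "/hep/cms": "cms-catalog",
    }
    print(longest_prefix_match(catalog, "/climate/CESM/surface-temp/2020"))
    # -> cesm-catalog (the /climate/CESM prefix is longer than /climate)
    ```

    A production forwarder would use a trie rather than a linear scan, but the matching rule is the same.
    
    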

    Refiner: Data Refining against Gradient Leakage Attacks in Federated Learning

    Recent works have brought attention to the vulnerability of Federated Learning (FL) systems to gradient leakage attacks. Such attacks exploit clients' uploaded gradients to reconstruct their sensitive data, thereby compromising the privacy protection capability of FL. In response, various defense mechanisms have been proposed to mitigate this threat by manipulating the uploaded gradients. Unfortunately, empirical evaluations have demonstrated limited resilience of these defenses against sophisticated attacks, indicating an urgent need for more effective defenses. In this paper, we explore a novel defensive paradigm that departs from conventional gradient perturbation approaches and instead focuses on the construction of robust data. Intuitively, if robust data exhibits low semantic similarity with clients' raw data, the gradients associated with robust data can effectively obfuscate attackers. To this end, we design Refiner, which jointly optimizes two metrics for privacy protection and performance maintenance. The utility metric promotes consistency between the gradients of key parameters associated with robust data and those derived from clients' data, thus maintaining model performance. The privacy metric guides the generation of robust data towards enlarging the semantic gap with clients' data. Theoretical analysis supports the effectiveness of Refiner, and empirical evaluations on multiple benchmark datasets demonstrate its superior effectiveness at defending against state-of-the-art attacks. Comment: under review.
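    For illustration only, the two metrics could be instantiated for a linear model with squared-error loss: gradient alignment as the utility term and Euclidean distance as a crude stand-in for the semantic gap. This is a toy sketch of the objective's shape, not Refiner's actual formulation; all names and the choice of distance are assumptions.

    ```python
    def grad_linear(w, x, y):
        """Gradient of 0.5 * (w.x - y)^2 w.r.t. w for a linear model: (w.x - y) * x."""
        err = sum(wi * xi for wi, xi in zip(w, x)) - y
        return [err * xi for xi in x]

    def cosine(a, b):
        dot = sum(ai * bi for ai, bi in zip(a, b))
        na = sum(ai * ai for ai in a) ** 0.5
        nb = sum(bi * bi for bi in b) ** 0.5
        return dot / (na * nb)

    def refiner_objective(w, raw_x, raw_y, robust_x, robust_y, lam=1.0):
        """Toy joint objective: utility rewards robust-data gradients that
        align with raw-data gradients (preserving model updates), while the
        privacy term rewards robust data that sits far from the raw data
        (a crude proxy for a large semantic gap). Higher is better on both.
        """
        utility = cosine(grad_linear(w, robust_x, robust_y),
                         grad_linear(w, raw_x, raw_y))
        privacy = sum((a - b) ** 2 for a, b in zip(robust_x, raw_x)) ** 0.5
        return utility + lam * privacy

    # robust point [2, 0] yields a gradient parallel to the raw one (utility 1)
    # while sitting at distance 1 from the raw point
    score = refiner_objective([1.0, 0.0], [1.0, 0.0], 0.0, [2.0, 0.0], 0.0)
    ```

    The actual method would maximize such an objective over candidate robust data with a deep model and a learned semantic measure.
    
    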

    Laser Beam Propagation through Oceanic Turbulence

    Using a recently proposed model for the refractive index fluctuations in oceanic turbulence, optical beam propagation through seawater is explored. The model provides an accurate depiction of the ocean by including the contributions of both temperature and salinity fluctuations to the refractive index. Several important statistical characteristics are explored, including the spatial coherence radius, angle-of-arrival fluctuations, and beam wander. Theoretical values of these parameters are derived under weak fluctuation theory using the Rytov method. The results presented serve as a foundation for the study of optical beam propagation in oceanic turbulence and may support further research on underwater communication, imaging, and sensing systems.

    Cryptosystems Resilient to Both Continual Key Leakages and Leakages from Hash Functions

    Yoneyama et al. introduced the Leaky Random Oracle Model (LROM for short) at ProvSec 2008 in order to discuss the security (or insecurity) of cryptographic schemes that use hash functions as building blocks when leakages of input-output pairs of the hash functions occur. This kind of leakage arises from various attacks enabled by sloppy usage or implementation. Their results showed that such leakage may threaten the security of some cryptographic schemes. However, an important fact is that such attacks would leak not only input-output pairs of hash functions but also the secret key. LROM is therefore rather limited in the sense that it considers leakage from input-output pairs of hash functions alone, without simultaneously taking into consideration possible leakage from the secret key. On the other hand, many other leakage models concentrate mainly on leakage from the secret key and ignore leakage from hash functions, even for schemes that exploit hash functions. Examples show that these drawbacks of LROM and of other leakage models can leave insecure some schemes that are secure in each kind of leakage model separately. In this paper, we present a model that augments both LROM and existing leakage models, in which both the secret key and input-output pairs of hash functions can be leaked. Furthermore, the secret key can be leaked continually during the whole life cycle of a cryptographic scheme. Hence, our new model is more general and stronger than LROM and several existing leakage models (e.g., the only-computation-leaks model and the bounded memory leakage model). As an application example, we also present a public key encryption scheme that is provably IND-CCA secure in our new model.